Round Lifecycle

Rizenet implements federated learning: the collective model learns from data that never leaves its owner’s infrastructure.
Rather than moving raw data, each round follows a predictable cycle in which only model parameters and evaluation scores travel across the network.

Quick Primer on Federated Learning

  • Local first: every participant (“trainer”) keeps its raw data private and trains a copy of the global model on-site.
  • Secure aggregation: the resulting model updates—not the data—are encrypted, signed, and sent to an on-chain aggregator.
  • Iterative improvement: by repeating the cycle, the shared model converges toward higher accuracy without compromising data privacy.
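
The loop below is a minimal, framework-agnostic sketch of this cycle in NumPy. The `local_update` stand-in and the toy datasets are illustrative placeholders, not Rizenet APIs.

```python
import numpy as np

# Hypothetical stand-ins: each trainer holds its own private dataset; a plain
# parameter vector stands in for the model.
def local_update(global_params, local_data):
    # Placeholder for on-site training: nudge parameters toward the local data mean.
    return global_params + 0.1 * (local_data.mean(axis=0) - global_params)

def federated_round(global_params, trainer_datasets):
    # Local first: every trainer produces an update from its own data.
    updates = [local_update(global_params, data) for data in trainer_datasets]
    # Aggregation: only the updates (never raw data) are combined.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
datasets = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]   # three trainers
params = np.zeros(4)
for _ in range(10):                     # iterative improvement across rounds
    params = federated_round(params, datasets)
```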

Rizenet Round Cycle

  1. Trainer Selection:
    The aggregator chooses a subset of whitelisted trainers for the next round, balancing factors such as availability, geographic diversity, and past contribution quality.
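
One possible selection policy is sketched below, assuming per-trainer records of availability, region, and past quality. The field names and weighting scheme are assumptions for illustration, not Rizenet's actual algorithm.

```python
import random

# Illustrative trainer records; field names and weights are assumptions.
trainers = [
    {"id": "t1", "available": True,  "region": "eu", "past_quality": 0.9},
    {"id": "t2", "available": True,  "region": "us", "past_quality": 0.6},
    {"id": "t3", "available": False, "region": "ap", "past_quality": 0.8},
    {"id": "t4", "available": True,  "region": "eu", "past_quality": 0.7},
]

def select_trainers(trainers, k, rng=random.Random(0)):
    candidates = [t for t in trainers if t["available"]]
    region_counts = {}
    for t in candidates:
        region_counts[t["region"]] = region_counts.get(t["region"], 0) + 1

    def weight(t):
        # Favor past contribution quality; down-weight over-represented regions.
        return t["past_quality"] / region_counts[t["region"]]

    # Weighted sampling without replacement (Efraimidis-Spirakis keys).
    keyed = sorted(candidates, key=lambda t: rng.random() ** (1 / weight(t)), reverse=True)
    return keyed[:k]

selected = select_trainers(trainers, k=2)
```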

  2. Local Training:
    Selected trainers download the current global model and train it on their private datasets.

    • Raw data stays inside the organization’s firewall.
    • Trainers may apply differential privacy noise locally for added protection.
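
A minimal sketch of the standard Gaussian mechanism a trainer might apply to its delta before upload; the clipping norm and noise multiplier are illustrative defaults, not Rizenet settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_delta(delta, clip_norm=1.0, noise_multiplier=0.5):
    # Bound the update's L2 norm (its sensitivity), then add calibrated Gaussian noise.
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=delta.shape)
    return clipped + noise

# Illustrative delta between locally trained and current global parameters.
global_params = np.zeros(4)
trained_params = np.array([0.3, -0.1, 0.8, 0.2])
upload = privatize_delta(trained_params - global_params)
```
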
  3. Generate Aggregations:
    Trainers sign and upload their model deltas.
    The aggregator produces one or more candidate global models (e.g., using FedAvg, robust median, or secure aggregation).
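
Two of the aggregation rules named above, sketched over plain NumPy arrays; the toy deltas are illustrative.

```python
import numpy as np

def fedavg(deltas, weights=None):
    # Weighted average of trainer deltas (weights might reflect dataset sizes).
    return np.average(np.stack(deltas), axis=0, weights=weights)

def robust_median(deltas):
    # Coordinate-wise median, less sensitive to outlier or adversarial updates.
    return np.median(np.stack(deltas), axis=0)

global_params = np.zeros(2)
deltas = [np.array([0.2, 0.1]), np.array([0.3, 0.0]), np.array([5.0, -4.0])]  # last is an outlier
candidates = {
    "fedavg": global_params + fedavg(deltas),
    "median": global_params + robust_median(deltas),
}
```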

  4. Peer Evaluation:
    The candidate models are broadcast back to the trainers, who evaluate them on local validation sets and return performance scores.
    Because every trainer supplies its own independent assessment, no single party can unilaterally decide which aggregation is accepted, which keeps the model's evolution free of single-party bias.
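
A sketch of how the aggregator might consolidate the returned scores, assuming each trainer reports an accuracy per candidate; taking the median across trainers is one simple way to resist a single exaggerated report. The figures are illustrative.

```python
import statistics

# Hypothetical reports: each trainer scores each candidate on its own private
# validation set; only the scores are shared.
reported = {
    "t1": {"fedavg": 0.82, "median": 0.84},
    "t2": {"fedavg": 0.79, "median": 0.83},
    "t3": {"fedavg": 0.88, "median": 0.81},
}

def pick_candidate(reported):
    candidates = next(iter(reported.values())).keys()
    # Median across trainers resists a single exaggerated report.
    consensus = {c: statistics.median(scores[c] for scores in reported.values())
                 for c in candidates}
    return max(consensus, key=consensus.get), consensus

best, consensus = pick_candidate(reported)   # best == "median" here
```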

  5. Compute Trainer Contribution:
    Using the evaluation results, Rizenet calculates each trainer’s marginal impact on model quality (an approach inspired by Shapley value approximations).
    These contribution scores are recorded on-chain.
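
A sketch of a Monte Carlo (permutation-sampling) Shapley approximation; the toy value function stands in for the peer-evaluation scores and is not Rizenet's actual metric.

```python
import random
import numpy as np

def subset_value(indices, deltas, evaluate):
    # Quality of a model built from only this subset of updates.
    if not indices:
        return evaluate(None)
    return evaluate(np.mean([deltas[i] for i in indices], axis=0))

def monte_carlo_shapley(deltas, evaluate, samples=200, rng=random.Random(0)):
    n = len(deltas)
    contrib = [0.0] * n
    for _ in range(samples):
        order = list(range(n))
        rng.shuffle(order)
        included, prev = [], subset_value([], deltas, evaluate)
        for i in order:
            included.append(i)
            cur = subset_value(included, deltas, evaluate)
            contrib[i] += (cur - prev) / samples     # average marginal gain
            prev = cur
    return contrib

# Toy stand-in for peer-evaluation quality: alignment with a "useful" direction.
useful = np.array([1.0, 0.0])
evaluate = lambda agg: 0.0 if agg is None else float(useful @ agg)

deltas = [np.array([0.9, 0.1]), np.array([0.8, -0.2]), np.array([0.0, 1.0])]
contributions = monte_carlo_shapley(deltas, evaluate)
```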

  6. Compensation Distribution:
    Trainers receive tokenized compensation proportional to their verified contribution—either newly issued model-ownership tokens or another designated reward asset.
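
A sketch of proportional payout, assuming a fixed reward pool per round and that negative contributions earn nothing; the numbers are illustrative.

```python
def distribute_rewards(contributions, reward_pool):
    # Pay out the round's pool in proportion to non-negative contribution scores.
    positive = {t: max(c, 0.0) for t, c in contributions.items()}
    total = sum(positive.values())
    if total == 0:
        return {t: 0.0 for t in contributions}
    return {t: reward_pool * c / total for t, c in positive.items()}

payouts = distribute_rewards({"t1": 0.45, "t2": 0.35, "t3": -0.05}, reward_pool=1_000)
# {'t1': 562.5, 't2': 437.5, 't3': 0.0}
```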

  7. Privacy Tuning:
    Before the next cycle, each trainer can adjust its differential-privacy settings: reducing noise to gain influence on the global model, or adding noise for stronger privacy, based on the previous round’s outcome.
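
One possible tuning heuristic is sketched below; the thresholds, step size, and bounds are assumptions for illustration, not part of Rizenet's protocol.

```python
def tune_noise(noise_multiplier, my_contribution, median_contribution,
               step=0.1, floor=0.1, ceiling=2.0):
    # Lagging the median: trade a little privacy (less noise) for more influence.
    if my_contribution < 0.8 * median_contribution:
        noise_multiplier -= step
    # Leading comfortably: afford stronger privacy (more noise) next round.
    elif my_contribution > 1.2 * median_contribution:
        noise_multiplier += step
    return min(max(noise_multiplier, floor), ceiling)

next_noise = tune_noise(noise_multiplier=0.5, my_contribution=0.2, median_contribution=0.4)
# -> 0.4 (slightly less noise next round)
```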

Why This Matters

  • Privacy by design: sensitive data never leaves institutional control.
  • Fair evolution: decentralized peer evaluation prevents single-party bias in model selection.
  • Transparent incentives: every contribution and compensation event is publicly auditable on-chain.
  • Adaptive security: differential-privacy tuning lets organizations choose the privacy/utility balance that suits their policies.
  • Continuous improvement: the iterative cycle keeps the model evolving as new data and trainers come online.

With this lifecycle, Rizenet turns scattered private datasets into a single, ever-improving intelligence layer—without asking anyone to give up custody of their data.